Initialization-Aware Score-Based Diffusion Sampling

Fassina, Tiziano, Cardoso, Gabriel, Le Corff, Sylvain, Romary, Thomas

arXiv.org Machine Learning

Score-based generative models (SGMs) aim to generate samples from a target distribution by approximating the reverse-time dynamics of a stochastic differential equation. Despite their strong empirical performance, classical samplers initialized from a Gaussian distribution require a long noising time horizon, which typically induces a large number of discretization steps and a high computational cost. In this work, we present a Kullback-Leibler convergence analysis of Variance Exploding diffusion samplers that highlights the critical role of the backward process initialization. Based on this result, we propose a theoretically grounded sampling strategy that learns the reverse-time initialization, directly minimizing the initialization error. The resulting procedure is independent of the specific score training procedure, network architecture, and discretization scheme. Experiments on toy distributions and benchmark datasets demonstrate competitive or improved generative quality while using significantly fewer sampling steps.
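To ground the idea, here is a minimal sketch of an Euler-Maruyama reverse-time Variance Exploding sampler on a toy Gaussian mixture whose noised score is available in closed form. The mixture, the noise schedule, and the `init_mean`/`init_std` parameters are illustrative assumptions, not the paper's method: the classical choice initializes the backward process from N(0, sigma_max^2), and a learned initialization would replace those two knobs with fitted values.

```python
import numpy as np

# Toy target: a two-component Gaussian mixture (illustrative, not from the paper).
MUS = np.array([-2.0, 2.0])
STDS = np.array([0.3, 0.3])
WEIGHTS = np.array([0.5, 0.5])

def score(x, sigma):
    """Exact score of the mixture convolved with N(0, sigma^2) noise."""
    var = STDS**2 + sigma**2                       # per-component variance
    diff = x[:, None] - MUS[None, :]               # shape (n, K)
    comp = WEIGHTS * np.exp(-0.5 * diff**2 / var) / np.sqrt(2 * np.pi * var)
    dens = comp.sum(axis=1)
    grad = (comp * (-diff / var)).sum(axis=1)
    return grad / dens

def ve_sample(n, n_steps, sigma_max, init_mean=0.0, init_std=None, seed=0):
    """Euler-Maruyama discretization of the reverse-time VE SDE.

    init_mean / init_std parameterize the backward initialization
    (hypothetical knobs for illustration); the classical choice is
    N(0, sigma_max^2), i.e. init_std = sigma_max.
    """
    rng = np.random.default_rng(seed)
    sigmas = np.geomspace(sigma_max, 1e-2, n_steps)  # decreasing noise levels
    init_std = sigma_max if init_std is None else init_std
    x = init_mean + init_std * rng.standard_normal(n)
    for i in range(n_steps - 1):
        dvar = sigmas[i]**2 - sigmas[i + 1]**2       # > 0 on a decreasing schedule
        x = x + dvar * score(x, sigmas[i]) + np.sqrt(dvar) * rng.standard_normal(n)
    return x

samples = ve_sample(n=5000, n_steps=50, sigma_max=10.0)
print(samples.mean(), samples.std())  # roughly 0 and ~2 for this mixture
```

Shrinking `init_std` toward the target's own scale in this toy setup illustrates the kind of gain a learned backward initialization targets: less initialization error to remove, hence fewer steps needed.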





Latent SDEs on Homogeneous Spaces

Neural Information Processing Systems

We consider the problem of variational Bayesian inference in a latent variable model where a (possibly complex) observed stochastic process is governed by the solution of a latent stochastic differential equation (SDE).
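As a concrete, hedged illustration of the generative direction of such a model, the sketch below simulates a latent SDE with Euler-Maruyama and decodes the latent path into observations. The Ornstein-Uhlenbeck drift, constant diffusion, and sine decoder are placeholders, and the variational part is only indicated in a comment; none of this reproduces the paper's homogeneous-space construction.

```python
import numpy as np

def euler_maruyama(z0, drift, diffusion, n_steps, dt, rng):
    """Simulate dz = f(z, t) dt + g(z, t) dW with the Euler-Maruyama scheme."""
    z, path = z0, [z0]
    for i in range(n_steps):
        t = i * dt
        dw = np.sqrt(dt) * rng.standard_normal(z.shape)
        z = z + drift(z, t) * dt + diffusion(z, t) * dw
        path.append(z)
    return np.stack(path)

rng = np.random.default_rng(0)

# Placeholder prior dynamics: an Ornstein-Uhlenbeck latent process.
prior_drift = lambda z, t: -z
diffusion = lambda z, t: 0.5

# Generative direction: latent path -> observations through a decoder.
decode = lambda z: np.sin(z)  # placeholder decoder, stands in for a network

z_path = euler_maruyama(rng.standard_normal(2), prior_drift, diffusion,
                        n_steps=100, dt=0.01, rng=rng)
x_path = decode(z_path)
print(x_path.shape)  # (101, 2)

# Variational inference would additionally fit a posterior drift f_phi(z, t)
# conditioned on observations and maximize an ELBO whose KL term penalizes
# deviation of f_phi from the prior drift along the sampled path.
```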


State-space Models with Layer-wise Nonlinearity are Universal Approximators with Exponential Decaying Memory

Neural Information Processing Systems

State-space models have gained popularity in sequence modelling due to their simple and efficient network structures. However, the absence of nonlinear activation along the temporal direction limits the model's capacity.
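Here is a minimal sketch of that architecture, under the assumption of a discrete-time linear recurrence per layer: each layer runs a purely linear state-space scan over time, and the nonlinearity (here tanh) is applied only between layers, never inside the temporal recurrence. All dimensions and the initialization of A are illustrative; keeping the spectral radius of A below one is what yields the exponentially decaying memory the title refers to.

```python
import numpy as np

def ssm_layer(x, A, B, C):
    """Linear state-space recurrence along time: h_t = A h_{t-1} + B x_t,
    y_t = C h_t. The recurrence itself stays linear; a stable A (spectral
    radius < 1) makes the influence of past inputs decay exponentially."""
    T, _ = x.shape
    h = np.zeros(A.shape[0])
    y = np.empty((T, C.shape[0]))
    for t in range(T):
        h = A @ h + B @ x[t]
        y[t] = C @ h
    return y

def stacked_ssm(x, layers):
    """Nonlinearity is applied only layer-wise (between layers), never
    along the temporal direction, matching the paper's setting."""
    for A, B, C in layers:
        x = np.tanh(ssm_layer(x, A, B, C))
    return x

rng = np.random.default_rng(0)
d_in, d_state, depth, T = 4, 8, 3, 64
layers = []
for _ in range(depth):
    A = 0.9 * np.eye(d_state) + 0.05 * rng.standard_normal((d_state, d_state))
    B = rng.standard_normal((d_state, d_in)) / np.sqrt(d_in)
    C = rng.standard_normal((d_in, d_state)) / np.sqrt(d_state)
    layers.append((A, B, C))

out = stacked_ssm(rng.standard_normal((T, d_in)), layers)
print(out.shape)  # (64, 4)
```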



Can neural operators always be continuously discretized?

Furuya, Takashi

Neural Information Processing Systems

We consider the problem of discretization of neural operators between Hilbert spaces in a general framework including skip connections. We focus on bijective neural operators through the lens of diffeomorphisms in infinite dimensions.
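To illustrate one way a skip connection interacts with bijectivity after discretization (an assumption-laden sketch, not the paper's construction): a residual layer v = u + K(u) with a contractive nonlinear integral operator K is invertible, and its inverse can be computed on a grid by fixed-point iteration.

```python
import numpy as np

def kernel_operator(u, grid, weight=0.1):
    """Discretized nonlinear integral operator tanh(∫ k(x, y) u(y) dy)
    on a uniform grid, with a smooth Gaussian kernel (illustrative choice).
    The small weight keeps the operator contractive."""
    dx = grid[1] - grid[0]
    K = weight * np.exp(-(grid[:, None] - grid[None, :])**2 / 0.1)
    return np.tanh(K @ u) * dx

def forward(u, grid):
    """Residual (skip-connection) layer v = u + K(u). If K is a
    contraction, this layer is bijective."""
    return u + kernel_operator(u, grid)

def inverse(v, grid, n_iter=50):
    """Invert the residual layer by fixed-point iteration u <- v - K(u),
    which converges when K is contractive."""
    u = v.copy()
    for _ in range(n_iter):
        u = v - kernel_operator(u, grid)
    return u

grid = np.linspace(0.0, 1.0, 128)
u = np.sin(2 * np.pi * grid)
v = forward(u, grid)
u_rec = inverse(v, grid)
print(np.max(np.abs(u - u_rec)))  # near machine precision for this K
```

The small reconstruction error confirms numerical invertibility for this particular contractive choice of K; the paper's question is whether such properties survive discretization in general.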